
Neural Information Processing Systems

Both models consist of 2 layers and the hidden dimension is fixed to 64. We add a weight decay of 5e-4 for Cora, Citeseer, and Pubmed, and 0 for the rest. The optimizer configuration and the training schedule are the same as in Section A.2. K_h(c − ĉ_i) (7), where i ∈ V denotes the evaluated node and h is the bandwidth of the kernel function. The classwise-ECEs are summarized in Table 3, and the KDE-ECEs are collected in Table 4. We adopt a heuristic that proportionally rescales the non-top-1 output probabilities so that the calibrated probabilistic output sums to one. While the ECEs of CaGCN in its original paper are promising [23], we observe that the ECEs of CaGCN are often unstable and sometimes even worse than those of the uncalibrated model in our experiments.
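The proportional-rescaling heuristic described above can be sketched as follows; the function name and signature are ours, assuming the calibrator has already produced a new top-1 probability:

```python
def rescale_probs(probs, calibrated_top1):
    """Given an original probability vector and a calibrated top-1
    probability, proportionally rescale the non-top-1 entries so the
    calibrated output still sums to one."""
    top_idx = max(range(len(probs)), key=probs.__getitem__)
    rest_mass = 1.0 - probs[top_idx]  # original non-top-1 mass
    out = []
    for i, p in enumerate(probs):
        if i == top_idx:
            out.append(calibrated_top1)
        else:
            # distribute the remaining (1 - calibrated_top1) mass
            # in proportion to the original non-top-1 probabilities
            out.append(p / rest_mass * (1.0 - calibrated_top1))
    return out

# e.g. rescale_probs([0.7, 0.2, 0.1], 0.6) keeps the 2:1 ratio of the
# non-top-1 entries while the vector still sums to one
```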



Subgroup Generalization and Fairness of Graph Neural Networks

Neural Information Processing Systems

Based on this analysis, our second contribution is the discovery of a type of unfairness that arises from theoretically predictable accuracy disparity across certain subgroups of test nodes.
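The accuracy disparity in question can be measured directly; a minimal sketch (names ours, not the paper's code) that computes per-subgroup test accuracy and the largest gap between any two subgroups:

```python
from collections import defaultdict

def subgroup_accuracies(labels, preds, groups):
    """Per-subgroup accuracy over test nodes, plus the maximum
    accuracy disparity between any two subgroups."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, yhat, g in zip(labels, preds, groups):
        total[g] += 1
        correct[g] += int(y == yhat)
    acc = {g: correct[g] / total[g] for g in total}
    disparity = max(acc.values()) - min(acc.values())
    return acc, disparity
```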


Gradient Rewiring for Editable Graph Neural Network Training

Neural Information Processing Systems

Deep neural networks are ubiquitously adopted in many applications, such as computer vision, natural language processing, and graph analytics. However, well-trained neural networks can make prediction errors after deployment as the world changes.


MG-HGNN: A Heterogeneous GNN Framework for Indoor Wi-Fi Fingerprint-Based Localization

Wang, Yibu, Zhang, Zhaoxin, Li, Ning, Zhao, Xinlong, Zhao, Dong, Zhao, Tianzi

arXiv.org Artificial Intelligence

Abstract--Received signal strength indicator (RSSI) is the primary representation of Wi-Fi fingerprints and serves as a crucial tool for indoor localization. However, existing RSSI-based positioning methods often suffer from reduced accuracy due to environmental complexity and challenges in processing multi-source information. To address these issues, we propose a novel multi-graph heterogeneous GNN framework (MG-HGNN) to enhance spatial awareness and improve positioning performance. In this framework, two graph construction branches perform node and edge embedding, respectively, to generate informative graphs. Subsequently, a heterogeneous graph neural network is employed for graph representation learning, enabling accurate positioning. The MG-HGNN framework introduces the following key innovations: 1) multi-type task-directed graph construction that combines label estimation and feature encoding for richer graph information; 2) a heterogeneous GNN structure that enhances the performance of conventional GNN models. Evaluations on the UJIIndoorLoc and UTSIndoorLoc public datasets demonstrate that MG-HGNN not only achieves superior performance compared to several state-of-the-art methods, but also provides a novel perspective for enhancing GNN-based localization methods. Ablation studies further confirm the rationality and effectiveness of the proposed framework. Index Terms--Fingerprint-based localization, graph neural network, heterogeneous network, received signal strength indicator (RSSI). Indoor localization technologies aim to estimate the position of mobile users or devices in indoor environments where satellite-based systems such as GPS are ineffective [1]. Over the past decade, a variety of wireless indoor localization techniques have been developed based on different sensing modalities, including Bluetooth Low Energy (BLE) [2], Ultra Wideband (UWB) [3], Radio Frequency Identification (RFID) [4], magnetic field sensing [5], and Wi-Fi [6], [7].
Among them, Wi-Fi based localization has attracted a lot of attention due to the ubiquity of Wi-Fi infrastructure, low deployment cost, and compatibility with existing mobile devices without requiring additional hardware [1]. This work has been submitted to the IEEE for possible publication. This work is supported by the National Key Research and Development Program of China [Grant No. 2024QY1103] and the Shandong Provincial Natural Science Foundation, China [Grant No. ZR2024QF138]. Yibu Wang, Zhaoxin Zhang, Ning Li, and Tianzi Zhao are with the School of Computer Science and Technology, Harbin Institute of Technology, China (e-mail: 24b903081@stu.hit.edu.cn). Xinlong Zhao is with the China Mineral Resources Group Big Data Co., Ltd, China (e-mail: xinlong.zhao@qq.com).
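A common starting point for GNN-based fingerprint localization is to turn RSSI fingerprint vectors (one vector of dBm readings per reference point) into a graph; the k-nearest-neighbour sketch below illustrates the idea only — MG-HGNN's task-directed, multi-type graph construction is considerably more elaborate, and all names here are ours:

```python
import math

def knn_graph(fingerprints, k=2):
    """Build a directed k-NN edge list over RSSI fingerprint vectors,
    connecting each reference point to its k most similar neighbours
    in signal space (Euclidean distance over dBm readings)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    edges = []
    for i, fi in enumerate(fingerprints):
        nbrs = sorted((j for j in range(len(fingerprints)) if j != i),
                      key=lambda j: dist(fi, fingerprints[j]))[:k]
        edges.extend((i, j) for j in nbrs)
    return edges
```

The resulting edge list can be fed to any GNN library; heterogeneous variants would additionally assign types to nodes and edges.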




The Final Layer Holds the Key: A Unified and Efficient GNN Calibration Framework

Huang, Jincheng, Xu, Jie, Shi, Xiaoshuang, Hu, Ping, Feng, Lei, Zhu, Xiaofeng

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) have demonstrated remarkable effectiveness on graph-based tasks. However, their predictive confidence is often miscalibrated, typically exhibiting under-confidence, which harms the reliability of their decisions. Existing calibration methods for GNNs normally introduce additional calibration components, which fail to capture the intrinsic relationship between the model and the prediction confidence, resulting in limited theoretical guarantees and increased computational overhead. To address this issue, we propose a simple yet efficient graph calibration method. We establish a unified theoretical framework revealing that model confidence is jointly governed by class-centroid-level and node-level calibration at the final layer. Based on this insight, we theoretically show that reducing the weight decay of the final-layer parameters alleviates GNN under-confidence by acting on the class-centroid level, while node-level calibration acts as a finer-grained complement to class-centroid level calibration, which encourages each test node to be closer to its predicted class centroid at the final-layer representations. Extensive experiments validate the superiority of our method.
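The effect of reducing final-layer weight decay can be seen in a toy decoupled-SGD update (a minimal sketch, all names ours): with a smaller decay, the final-layer class-centroid weights shrink less per step, so logit magnitudes, and hence confidence, stay larger.

```python
def sgd_step(params, grads, lr, weight_decay):
    """One SGD step with weight decay: w <- w - lr * (g + wd * w)."""
    return [w - lr * (g + weight_decay * w) for w, g in zip(params, grads)]

# Hypothetical two parameter groups with the same gradients: a hidden
# layer with standard decay, and a final layer with decay removed.
hidden = sgd_step([1.0, -0.5], [0.1, 0.0], lr=0.1, weight_decay=5e-4)
final = sgd_step([1.0, -0.5], [0.1, 0.0], lr=0.1, weight_decay=0.0)
# The final-layer weight is pulled toward zero less than the hidden one.
```

In practice this corresponds to passing separate parameter groups with different `weight_decay` values to the optimizer.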


Scaling Decentralized Learning with FLock

Cheng, Zehua, Sun, Rui, Sun, Jiahao, Guo, Yike

arXiv.org Artificial Intelligence

Fine-tuning large language models (LLMs) is hindered by the lack of centralized control and by the massive computing and communication overhead of decentralized schemes. While standard federated learning (FL) preserves data privacy, its central-server requirement creates a single point of attack and a vulnerability to poisoning attacks. Generalizing results in this direction to 70B-parameter models in heterogeneous, trustless environments has remained a major unsolved bottleneck. This paper introduces FLock, a decentralized framework for secure and efficient collaborative LLM fine-tuning. Integrating a blockchain-based trust layer with economic incentives, FLock replaces the central aggregator with a secure, auditable protocol for cooperation among untrusted parties. We present the first empirical validation of fine-tuning a 70B LLM in a secure, multi-domain, decentralized setting. Our experiments show that the FLock framework defends against backdoor poisoning attacks that compromise standard FL optimizers and fosters synergistic knowledge transfer. The resulting models show a >68% reduction in adversarial attack success rates. The global model also demonstrates superior cross-domain generalization, outperforming models trained in isolation on their own specialized data.
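For context, the central aggregation step that FLock replaces is, in standard FL, a size-weighted average of client parameters (FedAvg); the sketch below shows that baseline step only — it is not FLock's protocol, and the names are ours:

```python
def fed_avg(client_weights, client_sizes):
    """Standard FedAvg aggregation: average each parameter across
    clients, weighting each client by its local dataset size. In FLock
    this trusted-server step is replaced by a blockchain-audited,
    incentive-aligned protocol among untrusted parties."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```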